Manuel Spitschan
Technical University of Munich & Max Planck Institute for Biological Cybernetics, Germany
Preface
This document contains the analysis and results for the event A day in daylight, in which people from around the world measured a complete day of light exposure on (and around) 22 September 2025.
Note
Note that this script is optimized to generate plot outputs and objects for use in a dashboard. The direct outputs of the script may therefore look distorted in places.
Importing data
We first set up all packages needed for the analysis.
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag() masks stats::lag()
✖ dplyr::src() masks Hmisc::src()
✖ dplyr::summarize() masks Hmisc::summarize()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(gt)
Attaching package: 'gt'
The following object is masked from 'package:Hmisc':
html
library(rlang)

Attaching package: 'rlang'
The following objects are masked from 'package:purrr':
%@%, flatten, flatten_chr, flatten_dbl, flatten_int, flatten_lgl,
flatten_raw, invoke, splice
library(gganimate)
library(ggdist)
Attaching package: 'ggdist'
The following object is masked from 'package:rlang':
ll
library(sf)
Linking to GEOS 3.13.0, GDAL 3.8.5, PROJ 9.5.1; sf_use_s2() is TRUE
library(rnaturalearth)
library(rnaturalearthdata)
Attaching package: 'rnaturalearthdata'
The following object is masked from 'package:rnaturalearth':
countries110
library(scales)

Attaching package: 'scales'
The following object is masked from 'package:purrr':
discard
The following object is masked from 'package:readr':
col_factor
Next we import the survey data. Data were collected with REDCap, and there is an import script to load the data in.
source("scripts/prep_survey_data.r")
Connecting light data with survey data
First, we collect a list of available data sets. Since we need to match them against the device ids in the survey, we keep only the file name, without path or extension.
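A minimal sketch of this step, assuming the raw files live in a folder such as `data/light/` (the folder name is illustrative):

```r
# List all light-logger files; "data/light" is an assumed, illustrative path
files <- list.files("data/light", full.names = TRUE)

# Reduce each path to the bare file name: basename() drops the directory,
# tools::file_path_sans_ext() drops the extension, leaving the device id
file_ids <- tools::file_path_sans_ext(basename(files))
```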
Record ID 31 did not finish the post-survey, so we lack data on that device and consequently remove it. Furthermore, Record ID 30 only has data well outside the time frame of interest. Record ID 40 does not contain sensible data and will also be removed; a dead battery is the likely cause.
We also have to clean up the city and country, as well as latitude and longitude data. We do this separately and load the data back in. The manual entries for locations had to be cleaned; this was done with OpenAI through an API key. The results were stored in the file data/cleaned/places.csv. Uncomment the code cell below to recreate the process. The details of the outcome may vary between runs, however.
# library(ellmer)
# 
# data_devices_red <- 
#   data_devices |> 
#   select(record_id, city_country, latitude, longitude)
# 
# chat <- chat_openai("If there is more than one place specified, only use the first one. If latitude and longitude are misspecified, make a best guess based on city_country. Use IANA names for the time zone identifiers")
# 
# #reducing each line in a table to a single string
# data_devices_red <- 
#   data_devices_red |> 
#   pmap(~ paste(paste(names(data_devices_red), c(...), sep = ": "), collapse = ", "))
# 
# #creating an output structure
# type_place <- type_object(
#   record_id = type_string(),
#   city = type_string(),
#   country = type_string(),
#   latitude = type_number(),
#   longitude = type_number(),
#   tz_identifier = type_string(),
#   UTC_dev = type_number("deviation from UTC in hours, given the 22 September 2025")
# )
# 
# places <-
#   parallel_chat_structured(
#     chat,
#     data_devices_red,
#     type = type_place
#   )
# 
# write.csv(places, "data/cleaned/places.csv")
#read pre-cleaned data in
places <- read_csv("data/cleaned/places.csv")
New names:
• `` -> `...1`
Rows: 49 Columns: 8
── Column specification ────────────────────────────────────────
Delimiter: ","
chr (3): city, country, tz_identifier
dbl (5): ...1, record_id, latitude, longitude, UTC_dev
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
places <- 
  places |> 
  dplyr::mutate(record_id = as.character(record_id)) |> 
  select(-`...1`)

#merge data with main data
data_devices_cleaned <- 
  data_devices |> 
  select(-city_country, -latitude, -longitude) |> 
  mutate(record_id = as.character(record_id)) |> 
  left_join(places, by = "record_id") |> 
  mutate(
    city = case_match(
      city,
      "Tuebingen" ~ "Tübingen",
      "İzmir" ~ "Izmir",
      .default = city
    ),
    country = case_match(
      country,
      "The Netherlands" ~ "Netherlands",
      c("Turkiye", "Türkiye") ~ "Turkey",
      c("US", "United States", "USA") ~ "United States of America",
      "UK" ~ "United Kingdom",
      .default = country
    )
  )
First overview
The following code cells use the data imported so far to create some descriptive plots about the sample.
sex_lab <- primitive_bracket(
  key = key_range_manual( # <- positions + labels
    start = c(-7, 0.1),
    end = c(-0.1, 7), # -6 and +6 on the x-axis
    name = c("Males", "Females")
  ),
  position = "bottom" # draw it at the bottom of the panel
)
Next, we import the light data. Two devices are in use, ActLumus and ActTrust, and we need to import them separately, as they use different import functions. A device_id with four digits indicates an ActLumus device, whereas seven digits indicate an ActTrust. We add a column to the data indicating the type of device in use. We also make sure that the spelling matches the supported_devices() list from LightLogR. Then we construct file paths for all files.
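The classification described here can be sketched as follows; the device_id column follows the text, while the data folder and file extension are assumptions for illustration:

```r
library(dplyr)

data_devices_cleaned <- data_devices_cleaned |>
  mutate(
    # four digits -> ActLumus, seven digits -> ActTrust, matching the
    # spelling used by LightLogR::supported_devices()
    device_type = case_when(
      nchar(device_id) == 4 ~ "ActLumus",
      nchar(device_id) == 7 ~ "ActTrust"
    ),
    # one file path per device; folder and extension are assumed
    file_path = file.path("data/light", paste0(device_id, ".txt"))
  )
```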
data_devices_cleaned <- 
  data_devices_cleaned |> 
  dplyr::mutate(
    data = purrr::pmap(
      list(x = device_type, y = file_path, z = tz_identifier), 
      \(x, y, z) {
        import_Dataset(device = x, filename = y, tz = z, silent = TRUE)
      }
    )
  )
We end up with one dataset per row entry. As the two ActTrust files do not contain a melanopic EDI column, we will use the photopic illuminance column LIGHT instead. As only two participants are affected by this shortcoming, it will not unduly influence the results.
Further, the dataset from Malaysia had a device malfunction on 22 September, and the device only worked from 23 September onwards. As there are minimal differences between dates and very few datasets in that region, we will not dismiss that dataset but rather shift its data by one day. We also need to shift the data a further 8 hours backwards, as that dataset was stored in UTC time (for some reason).
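Such a shift can be done with lubridate arithmetic on the Datetime column; a sketch, in which the record id "MY" used to single out the Malaysian dataset is a placeholder:

```r
library(dplyr)
library(purrr)
library(lubridate)

data_devices_cleaned <- data_devices_cleaned |>
  mutate(data = map2(data, record_id, \(d, id) {
    if (id != "MY") return(d) # "MY" is a placeholder record id
    # shift back one day (device only recorded from 23 September) plus
    # 8 hours, following the description above (data stored in UTC
    # rather than local Malaysian time, UTC+8)
    d |> mutate(Datetime = Datetime - days(1) - hours(8))
  }))
```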
In this section we will prepare the light data through the following steps:
resampling data to 5 minute intervals
filling in missing data with explicit gaps
removing data that does not fall between 2025-09-21 10:00:00 UTC and 2025-09-23 12:00:00 UTC, which contains all times where 22 September occurs somewhere on the planet
data_devices <- 
  data_devices_cleaned |> 
  dplyr::mutate(
    data = purrr::map(data, \(x) {
      x |> 
        aggregate_Datetime("5 mins") |> #resample to 5 mins
        gap_handler(full.days = TRUE) |> #put in explicit gaps
        filter_Datetime(
          start = "2025-09-21 10:00:00",
          end = "2025-09-23 12:00:00",
          tz = "UTC"
        ) #cut out a section of data
    })
  )
Next, we add a secondary Datetime column that runs on UTC time.
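A minimal sketch of this step; the column name Datetime.UTC is an assumption, and with_tz() re-expresses the same instant in a different time zone without changing it:

```r
library(dplyr)
library(purrr)
library(lubridate)

data_devices <- data_devices |>
  mutate(data = map(data, \(x) {
    # same instant in time, displayed in UTC instead of local time
    x |> mutate(Datetime.UTC = with_tz(Datetime, tzone = "UTC"))
  }))
```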
In this section we deal with the activity logs - first by filtering them out of the dataset and selecting the relevant aspects.
events <- 
  data |> 
  filter(redcap_repeat_instrument == "log_a_new_activity") |> 
  select(
    record_id, type.factor, social_context.factor, wear_activity.factor, 
    nonwear_activity.factor, nighttime.factor, setting_level01.factor, 
    setting_level02_indoors.factor, setting_level02_indoors_home.factor, 
    setting_level02_indoors_workingspace.factor, 
    setting_level02_indoors_healthfacility.factor, 
    setting_level02_indoors_learningfacility.factor, 
    setting_level02_indoors_leisurespace.factor, 
    setting_level02_indoors_retailfacility.factor, 
    setting_level02_mixed.factor, setting_level02_outdoors.factor, 
    lighting_scenario_daylight___1.factor, lighting_scenario_daylight___2.factor, 
    lighting_scenario_daylight___3.factor, lighting_scenario_daylight___4.factor, 
    lighting_scenario_3___1.factor, lighting_scenario_3___2.factor, 
    lighting_scenario_3___3.factor, lighting_scenario_2___1.factor, 
    lighting_scenario_2___2.factor, lighting_scenario_2___3.factor, 
    lighting_scenario_2___4.factor, autonomy.factor, notes, startdate, enddate
  )

#adding labels to the factors
label(events$type.factor) = "Wear type: Are you wearing the light logger at the moment?"
label(events$social_context.factor) = "Are you alone or with others?"
label(events$wear_activity.factor) = "Wear activity"
label(events$nonwear_activity.factor) = "Non-wear activity"
label(events$nighttime.factor) = "Where was the light logger when you were asleep?"
label(events$setting_level01.factor) = "Select the setting"
label(events$setting_level02_indoors.factor) = "Indoors setting"
label(events$setting_level02_indoors_home.factor) = "Indoors setting (home)"
label(events$setting_level02_indoors_workingspace.factor) = "Indoors setting (working space)"
label(events$setting_level02_indoors_learningfacility.factor) = "Indoors setting (learning facility)"
label(events$setting_level02_indoors_retailfacility.factor) = "Indoors setting (retail facility)"
label(events$setting_level02_indoors_healthfacility.factor) = "Indoors setting (health facility)"
label(events$setting_level02_indoors_leisurespace.factor) = "Indoors setting (leisure space)"
label(events$setting_level02_outdoors.factor) = "Outdoors setting"
label(events$setting_level02_mixed.factor) = "Indoors-outdoors setting"
label(events$lighting_scenario_daylight___1.factor) = "Select lighting setting (daylight) (choice=Outdoors (direct sunlight))"
label(events$lighting_scenario_daylight___2.factor) = "Select lighting setting (daylight) (choice=Outdoors (in shade / cloudy))"
label(events$lighting_scenario_daylight___3.factor) = "Select lighting setting (daylight) (choice=Indoors (near window / exposed to daylight))"
label(events$lighting_scenario_daylight___4.factor) = "Select lighting setting (daylight) (choice=Indoors (away from window))"
label(events$lighting_scenario_3___1.factor) = "Select lighting setting (electric light) (choice=Lights are switched on)"
label(events$lighting_scenario_3___2.factor) = "Select lighting setting (electric light) (choice=Low-light or dimmed lights)"
label(events$lighting_scenario_3___3.factor) = "Select lighting setting (electric light) (choice=Completed darkness)"
label(events$lighting_scenario_2___1.factor) = "Select lighting setting (screen use) (choice=Smartphone)"
label(events$lighting_scenario_2___2.factor) = "Select lighting setting (screen use) (choice=Tablet)"
label(events$lighting_scenario_2___3.factor) = "Select lighting setting (screen use) (choice=Computer)"
label(events$lighting_scenario_2___4.factor) = "Select lighting setting (screen use) (choice=Television)"
label(events$autonomy.factor) = "Were the lighting conditions in this setting self-selected (i.e., you had control over lighting intensity, spectrum, or exposure)?"
Next, we condense columns that can be expressed as one. We also drop the .factor suffix, now that all duplicate columns are removed. Finally, we simplify entries.
events <- 
  events |> 
  rename_with(\(x) x |> str_remove(".factor")) |> #remove .factor extension
  dplyr::mutate(
    type = type |> 
      fct_relabel(\(x) str_remove(x, "-time| time| \\(not wearing light logger\\)")),
    across(c(wear_activity, setting_level01), \(x) 
      x |> fct_relabel(\(y) str_remove(y, " \\(.*\\)"))
    ),
    nonwear_activity = nonwear_activity |> 
      fct_recode(
        "Dark mobile" = "Left in a bag, or other mobile dark place",
        "Dark stationary" = "Left in a drawer or cabinet, or other stationary dark place",
        "Stationary" = "Left on a table or other surface with varying light exposure"
      ),
    nighttime = nighttime |> 
      fct_recode(
        "Upward" = "Facing upward on bedside table",
        "Downward" = "Facing downward on bedside table"
      ),
    across(c(setting_level02_indoors, setting_level02_outdoors), \(x) 
      x |> fct_recode(
        "Leisure" = "Leisure space (sports, recreation, entertainment)",
        "Commercial" = "Retail, food or services facility",
        "Workplace" = "Working space",
        "Education" = "Learning facility",
        "Healthcare" = "Health facility"
      )
    ),
    setting_level01 = setting_level01 |> 
      fct_recode("Mixed" = "Indoor-outdoor setting"),
    autonomy = autonomy |> 
      fct_recode(
        Yes = "Yes, fully self-selected (e.g., adjusting lights at home or in a private office, moving to shaded area)",
        Partly = "Partly self-selected (e.g., some control such as opening blinds or switching a desk lamp, but not over main lighting)",
        No = "Not self-selected (e.g., public transport, shared office, classroom, hospital, airplane, etc.)",
        NULL = "Not applicable"
      )
  ) |> 
  dplyr::rename(setting_light = setting_level01)
In this step, we expand the light measurements with the event data. To this end, we need to specify start and end times for each log entry and thus state.
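One way to sketch this expansion, assuming a long light_data table with one row per participant and timestamp (the table and column names here are assumptions based on the surrounding code):

```r
library(dplyr)

# each log entry is valid from its start until the next entry begins
events_intervals <- events |>
  group_by(record_id) |>
  arrange(startdate, .by_group = TRUE) |>
  mutate(enddate = coalesce(enddate, lead(startdate))) |>
  ungroup()

# a non-equi join attaches the active log state to every light sample
light_data_ext <- light_data |>
  left_join(
    events_intervals,
    by = join_by(record_id, between(Datetime, startdate, enddate))
  )
```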
Lastly, a function to create the actual table. It takes both the duration_tibble() and histogram_plot() functions and creates a gt html table based on them.
duration_table <- function(variable) {
  variable_chr <- rlang::ensym(variable) |> as_string()
  
  duration_tibble({{ variable }}) |> 
    arrange(desc(median)) |> 
    gt(rowname_col = variable_chr) |> 
    fmt_number(mean:max, decimals = 0) |> 
    cols_add(Plot = {{ variable }}, Plot2 = {{ variable }}, .after = max) |> 
    cols_add(rel_duration = total_duration / sum(total_duration), .after = total_duration) |> 
    fmt_percent(rel_duration, decimals = 0) |> 
    cols_label(
      duration = "Episode duration",
      total_duration = "Total duration",
      episodes = "Episodes",
      mean = "Mean",
      min = "0%",
      q025 = "2.5%",
      q250 = "17%",
      median = "Median",
      q750 = "83%",
      q975 = "97.5%",
      max = "100%",
      Plot = "Histogram",
      Plot2 = "Timeline",
      rel_duration = "Relative duration"
    ) |> 
    text_transform(locations = cells_body(Plot), fn = \(x) {
      gt::ggplot_image({
        x %>% map(\(y) histogram_plot({{ variable }}, y))
      }, height = gt::px(75), aspect_ratio = 2)
    }) |> 
    text_transform(locations = cells_body(Plot2), fn = \(x) {
      gt::ggplot_image({
        x %>% map(\(y) timeline_plot({{ variable }}, y))
      }, height = gt::px(75), aspect_ratio = 2)
    }) |> 
    fmt_duration(2:3, input_units = "seconds", max_output_units = 2) |> 
    tab_footnote(
      "Scaled histograms show 66% of data in dark blue, 95% of data in dark & light blue, and the most extreme 5% of data in grey. The median is shown as a red dot.",
      locations = cells_column_labels(Plot)
    ) |> 
    tab_footnote(
      "Timeline plots show the median of the condition (yellow dot) against the rest of the data (grey dot). Colored bands show the interquartile range. Size of the dots indicates the relative number of instances, i.e., large dots indicate many occurrences at a given time (relative to other times).",
      locations = cells_column_labels(Plot2)
    ) |> 
    tab_footnote(
      "Average duration of an episode in this category. eps. = Episode(s)",
      locations = cells_column_labels(duration)
    ) |> 
    tab_footnote(
      "An episode refers to a log entry in a category, until a different entry",
      locations = cells_column_labels(episodes)
    ) |> 
    # cols_units(mean = "lx",
    #            median = "lx") |> 
    gt::tab_spanner("Distribution", min:max) |> 
    tab_footnote(
      "Melanopic equivalent daylight illuminance (lx)",
      locations = list(
        cells_column_labels(c(mean)),
        cells_column_spanners()
      )
    ) |> 
    cols_merge_n_pct(total_duration, rel_duration) |> 
    cols_merge(c(duration, episodes), pattern = "{1} ({2} eps.)") |> 
    cols_align("center") |> 
    tab_style(
      style = cell_fill(color = "grey"),
      locations = list(
        #cells_body(columns = c(min, max)),
        cells_column_labels(c(min, max))
      )
    ) |> 
    tab_style(
      style = cell_fill(color = "tomato"),
      locations = list(
        #cells_body(median),
        cells_column_labels(median)
      )
    ) |> 
    tab_style(
      style = cell_text(weight = "bold"),
      locations = list(
        cells_body(median),
        cells_stub(),
        cells_column_labels(median),
        cells_column_spanners()
      )
    ) |> 
    tab_style(
      style = cell_fill(color = "#4880B8"),
      locations = list(
        #cells_body(columns = c(q250, q750)),
        cells_column_labels(c(q250, q750))
      )
    ) |> 
    tab_style(
      style = cell_fill(color = "#C2DAE9"),
      locations = list(
        #cells_body(columns = c(q025, q975)),
        cells_column_labels(c(q025, q975))
      )
    ) |> 
    cols_move_to_end(c(mean, duration, total_duration)) |> 
    tab_header(
      "Summary of melanopic EDI across log entries",
      subtitle = glue("Question/Characteristic: {label(light_data_ext[[variable_chr]])}")
    ) |> 
    tab_style(
      style = cell_text(color = "red3"), 
      locations = cells_stub(
        rows = rel_duration < 0.05 # condition using another column
      )
    ) |> 
    tab_footnote(
      "Red indicates this category represents < 5% of entries",
      locations = list(cells_stub(rows = rel_duration < 0.05))
    )
}
label(light_data_ext$setting_light) <- "What is your general context?"
label(light_data_ext$autonomy) <- "Were the lighting conditions in this setting self-selected?"
label(light_data_ext$setting_indoors_workingspace) <- "Indoors setting (work)"
label(light_data_ext$scenario_daylight2) <- "Daylight conditions (stratified)"
label(light_data_ext$scenario_electric2) <- "Electric lighting conditions (stratified)"
label(light_data_ext$country) <- "Country"
label(light_data_ext$sex) <- "Sex"

c("social_context", "type", "setting_light", "setting_light2", "country", "sex", 
  "autonomy", "setting_indoors", "setting_location", "setting_outdoors", 
  "setting_mixed", "nighttime", "nonwear_activity", "type.detail", 
  "setting_specific", "setting_indoors_home", "setting_indoors_workingspace", 
  "setting_indoors_healthfacility", "setting_indoors_learningfacility", 
  "setting_indoors_retailfacility", "scenario_daylight", "scenario_electric", 
  "screen_phone", "screen_tablet", "screen_pc", "screen_tv", "behaviour_change", 
  "travel_time_zone", "scenario_daylight2", "scenario_electric2", "wear_activity"
) |> 
  walk(\(x) {
    symbol <- sym(x)
    table <- duration_table(!!symbol)
    table
    gtsave(table, glue("tables/table_duration_{x}.png"), vwidth = 1200)
  })
Time above threshold
In this section we calculate the time above threshold for the single day of 22 September 2025 across latitude and country.
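With data resampled to 5-minute epochs, time above threshold reduces to counting samples; a manual sketch (the column names MEDI and record_id are assumptions based on the surrounding code; LightLogR also offers a duration_above_threshold() helper for this):

```r
library(dplyr)

tat_250 <- light_data |>
  group_by(record_id) |>
  summarize(
    # each sample above 250 lx melanopic EDI contributes 5 minutes;
    # result expressed in hours
    TAT_250_h = sum(MEDI > 250, na.rm = TRUE) * 5 / 60
  )
```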
In this section we look at the global distribution of melanopic EDI, the time above 250 lx, and the dose of light.
Shade data
In this section we calculate day and night times around the globe. As we want to use this information for a looped visualization, we will set a slightly different cutoff and only collect 48 hours.
# Time window (UTC)
t_start <- ymd_hms("2025-09-21 12:00:00", tz = "UTC")
t_end <- ymd_hms("2025-09-23 12:00:00", tz = "UTC")

# Step between frames (adjust to taste: "30 mins", "1 hour", etc.)
time_step <- "30 mins"

# Spatial grid resolution (degrees). 2° keeps things light; 1° looks smoother but is heavier.
lon_step <- 0.5
lat_step <- 0.5

# Darkness mapping: fully dark at civil twilight (-6°), linearly increasing 
# from 0 to 1 as altitude goes 0 -> -6°.
dark_full_altitude <- -6 # degrees

# ---------- build lon/lat grid ----------
lons <- seq(-180, 180, by = lon_step)
lats <- seq(-90, 90, by = lat_step)
grid <- expand.grid(lon = lons, lat = lats) |> as_tibble()

# ---------- time sequence ----------
times_utc <- seq(t_start, t_end, by = time_step)

# ---------- compute sun altitude for each (lon, lat, time) ----------
# We'll loop over time slices (efficient enough for this grid size),
# compute altitude (in radians from suncalc), convert to degrees,
# then map to a "darkness" alpha value.
compute_slice <- function(tt) {
  # build a data frame with the timestamp repeated for each grid point
  df <- data.frame(
    date = rep(tt, nrow(grid)),
    lat = grid$lat,
    lon = grid$lon
  )
  
  # call suncalc with a data frame instead of separate lat/lon vectors
  sp <- suncalc::getSunlightPosition(data = df, keep = c("altitude"))
  
  # altitude is returned in radians; convert to degrees
  alt_deg <- sp$altitude * 180 / pi
  
  # darkness mapping (same as before)
  darkness <- pmin(1, pmax(0, -alt_deg / 6))
  
  tibble(
    lon = grid$lon,
    lat = grid$lat,
    time = tt,
    alt_deg = alt_deg,
    darkness = darkness
  )
}
# Create the progress bar once
pb <- progress_bar$new(
  format = "Computing slices [:bar] :percent eta: :eta",
  total = length(times_utc), clear = FALSE, width = 70
)

shade_list <- vector("list", length(times_utc))

for (i in seq_along(times_utc)) {
  # Compute one time slice (without pb$tick() inside)
  shade_list[[i]] <- compute_slice(times_utc[[i]])
  # Now update the bar
  pb$tick()
}

shade_df <- dplyr::bind_rows(shade_list)